HADOOP-19672. Network switchover when Apache client throws error #7967
base: trunk
Conversation
🎊 +1 overall
This message was automatically generated.
    builder.setConnectionManager(connMgr)
        .setRequestExecutor(
-           new AbfsManagedHttpRequestExecutor(abfsConfiguration.getHttpReadTimeout()))
+           new AbfsManagedHttpRequestExecutor(
As we were discussing last time, if we keep this tied to the read timeout, the 100-continue timeout would become 30 seconds. This should be a separate config for the 100-continue timeout.
+1
Sure, will create a new config for this timeout and use it when 100-continue is enabled.
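To illustrate the separation being agreed on above, here is a minimal sketch of a dedicated expect-100-continue timeout kept apart from the general read timeout, so enabling 100-continue no longer inherits the 30-second read timeout. The config key names and default values are assumptions for illustration, not the actual ABFS keys:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a dedicated expect-100-continue timeout, separate from the
// general read timeout. Key names and defaults are hypothetical.
public class TimeoutConfigSketch {
    static final String READ_TIMEOUT_KEY = "fs.azure.http.read.timeout";            // assumed key
    static final String EXPECT_100_TIMEOUT_KEY = "fs.azure.http.expect100.timeout"; // assumed key
    static final int DEFAULT_READ_TIMEOUT_MS = 30_000;
    static final int DEFAULT_EXPECT_100_TIMEOUT_MS = 5_000;

    private final Map<String, Integer> conf;

    TimeoutConfigSketch(Map<String, Integer> conf) {
        this.conf = conf;
    }

    int getHttpReadTimeout() {
        return conf.getOrDefault(READ_TIMEOUT_KEY, DEFAULT_READ_TIMEOUT_MS);
    }

    // Used only for the wait after sending "Expect: 100-continue".
    int getExpect100ContinueTimeout() {
        return conf.getOrDefault(EXPECT_100_TIMEOUT_KEY, DEFAULT_EXPECT_100_TIMEOUT_MS);
    }

    public static void main(String[] args) {
        TimeoutConfigSketch c = new TimeoutConfigSketch(new HashMap<>());
        System.out.println("read=" + c.getHttpReadTimeout()
            + " expect100=" + c.getExpect100ContinueTimeout());
    }
}
```

With this shape, the request executor can be handed the 100-continue timeout only when the feature is enabled, leaving the read timeout untouched.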
        DelegatingSSLSocketFactory.getDefaultFactory(),
-       abfsConfiguration, keepAliveCache, baseUrl);
+       abfsConfiguration, keepAliveCache, baseUrl,
+       abfsConfiguration.getFsConfiguredServiceType() == abfsServiceType);
Can you add some comments around this change explaining how it affects the need for cache warmup?
+1
What are we trying to achieve here?
The reason for this change: the keep-alive cache is at the client level, and we were doing cache warmup for both clients separately. With this change, we do cache warmup only for the default client, not for both.
Will add a comment in the code as well.
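To make the explanation above concrete, here is a minimal sketch of gating warmup on whether the client being constructed matches the filesystem's configured default service type. All names here are assumptions for illustration, not the actual ABFS code:

```java
// Sketch: warm up the keep-alive cache only for the default client, i.e.
// the client whose service type equals the configured filesystem service
// type. Names are hypothetical.
public class WarmupGateSketch {
    enum AbfsServiceType { DFS, BLOB }

    static boolean isDefaultClient(AbfsServiceType configured, AbfsServiceType clientType) {
        return configured == clientType;
    }

    // Returns how many connections the client should pre-create: the
    // secondary client skips warmup entirely.
    static int warmupCount(AbfsServiceType configured, AbfsServiceType clientType,
            int configuredCount) {
        return isDefaultClient(configured, clientType) ? configuredCount : 0;
    }

    public static void main(String[] args) {
        System.out.println(warmupCount(AbfsServiceType.DFS, AbfsServiceType.DFS, 5));  // default client warms up
        System.out.println(warmupCount(AbfsServiceType.DFS, AbfsServiceType.BLOB, 5)); // secondary client skips
    }
}
```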
-   cacheExtraConnection(route,
+   int totalConnectionsCreated = cacheExtraConnection(route,
        abfsConfiguration.getApacheCacheWarmupCount());
+   if (totalConnectionsCreated == 0) {
Even if we fail or catch a RejectedExecutionException for any one of the tasks, do we want to register the fallback? The successfully submitted tasks might have already increased totalConnectionsCreated.
Yes, makes sense. I have updated the return value in the case of a rejected exception; everything else remains the same.
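A sketch of the counting behaviour agreed above, using hypothetical names: tasks that were submitted before a rejection still count toward the total, and the caller registers the fallback only when the returned total is zero:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: count successfully created warmup connections; a rejected
// submission stops the loop but preserves the count accumulated so far.
public class CacheWarmupSketch {
    static int cacheExtraConnection(ExecutorService pool, int warmupCount) {
        AtomicInteger created = new AtomicInteger();
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < warmupCount; i++) {
            Runnable createConnection = created::incrementAndGet; // stands in for real connection setup
            try {
                futures.add(pool.submit(createConnection));
            } catch (RejectedExecutionException e) {
                break; // keep the count from tasks already submitted
            }
        }
        for (Future<?> f : futures) {
            try {
                f.get();
            } catch (Exception ignored) {
                // a failed task simply does not increment the count
            }
        }
        return created.get();
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(cacheExtraConnection(pool, 3));
        pool.shutdown();
    }
}
```

The caller would then register the JDK fallback only when nothing was created, e.g. `if (cacheExtraConnection(pool, n) == 0) { /* register fallback */ }`.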
        new URL("https://test.com"), true);

    Assertions.assertThat(AbfsApacheHttpClient.usable())
        .describedAs("Apache HttpClient should be not usable")
Can you make one HTTP call and validate that the JDK client is now being used? The user agent can be used for validation.
Sure, will do that.
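One way such a test could distinguish the two libraries is by inspecting a library-identifying token in the request's User-Agent, as suggested above. The token strings and helper names below are hypothetical placeholders, not the real ABFS user-agent format:

```java
// Sketch: decide which networking library handled a request by checking a
// library-identifying suffix in the User-Agent. Token values are
// hypothetical, not the real ABFS tokens.
public class UserAgentCheckSketch {
    static final String APACHE_TOKEN = "ApacheHttpClient";
    static final String JDK_TOKEN = "JDK";

    static String buildUserAgent(String base, boolean apacheUsable) {
        return base + "; " + (apacheUsable ? APACHE_TOKEN : JDK_TOKEN);
    }

    static boolean usedJdk(String userAgent) {
        return userAgent.endsWith(JDK_TOKEN);
    }

    public static void main(String[] args) {
        // After fallback, the user agent should identify the JDK client.
        System.out.println(usedJdk(buildUserAgent("abfs/3.4", false)));
    }
}
```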
        final AbfsClientContext abfsClientContext) throws IOException {
      super(baseUrl, sharedKeyCredentials, abfsConfiguration, tokenProvider,
-         encryptionContextProvider, abfsClientContext);
+         encryptionContextProvider, abfsClientContext, AbfsServiceType.BLOB);
Why this change?
As described in the previous comment, we need to identify the default client, and for that we compare these values with the configured default.
        final SASTokenProvider sasTokenProvider,
        final EncryptionContextProvider encryptionContextProvider,
        final AbfsClientContext abfsClientContext) throws IOException {
+     initServiceType(abfsConfiguration);
Why this change?
This will initialize the default and ingress service types. It is needed before creating the clients so that we can do cache warmup only for the default client.
    public static final HttpOperationType DEFAULT_NETWORKING_LIBRARY
-       = HttpOperationType.JDK_HTTP_URL_CONNECTION;
+       = HttpOperationType.APACHE_HTTP_CLIENT;
Nit: we should change this in our .md file as well.
Sure, will make the change in .md file as well
JIRA – https://issues.apache.org/jira/browse/HADOOP-19672
In case of a network error while using the Apache client, we allow the client to switch over from Apache to JDK. This network fallback occurs in two scenarios:
This fallback is applied at the JVM level, so all file system calls will use the JDK client once the switch occurs.
There is also a possibility of recovery. During cache warmup, if connections are successfully created using the Apache client, the system will automatically switch back to the Apache client.
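Putting the pieces together, the JVM-level switchover and recovery described above can be sketched as a static flag. The names below are assumptions modeled on the `usable()` check seen in the test snippet earlier; the real implementation lives in the ABFS client code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: JVM-level fallback flag. A network error marks the Apache client
// unusable for all filesystem instances in the JVM; a later cache warmup
// that actually creates connections switches back to Apache.
public class SwitchoverSketch {
    private static final AtomicBoolean APACHE_USABLE = new AtomicBoolean(true);

    static boolean usable() {
        return APACHE_USABLE.get();
    }

    // Called when the Apache client hits a network error.
    static void registerFallback() {
        APACHE_USABLE.set(false);
    }

    // Called after cache warmup; recover only if connections were created.
    static void maybeRecover(int connectionsCreated) {
        if (connectionsCreated > 0) {
            APACHE_USABLE.set(true);
        }
    }

    public static void main(String[] args) {
        registerFallback();   // network error: switch every caller to JDK
        maybeRecover(2);      // warmup succeeded: switch back to Apache
        System.out.println(usable());
    }
}
```

Because the flag is static, a single failure flips every filesystem instance in the JVM to the JDK client, matching the JVM-level behaviour described above.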